Merging Orchestrator with RHDH chart #109
openshift-merge-bot[bot] merged 22 commits into redhat-developer:main
Conversation
c66621f to e7583f9
@masayag Updated PR
/cc

FYI /cc @durandom
3e33b4b to 9aa8fb4
rm3l left a comment
@ElaiShalevRH A few comments below.
9aa8fb4 to 3f2d1df
Hey @rm3l,
3f2d1df to 767e884
```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
```
QQ about this: as I understand it, it is not possible to have 2 SonataFlowPlatform CRs in the same namespace (I noticed a Duplicate error in the second CR status), right?
So this means that it won't be possible to run 2 RHDH Orchestrator instances in the same namespace:
```
Release "my-backstage-orchestrator-2" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists.
Unable to continue with install: SonataFlowPlatform "sonataflow-platform" in namespace "my-ns" exists and cannot be imported into the current release: invalid ownership metadata;
annotation validation error: key "meta.helm.sh/release-name" must equal "my-backstage-orchestrator-2":
current value is "my-backstage-orchestrator-1"
```
The problem is that we've had some (valid IMO) use cases where the user had access to a single namespace and deployed 2 instances of RHDH with different version/settings. And I can imagine the same for the Orchestrator flavor.
If this is something you are already aware of, I guess it should be called out in a note in the README.
(Or, instead of letting Helm fail, maybe check if the resource already exists and skip its creation? In which case, there might be other questions, like what happens to the SonataFlowPlatform CR if I uninstall the first Release but keep the second one?)
So a couple of things:
- Yes, only one SonataFlowPlatform can exist in each namespace, even under a different name. I'll add a check to validate whether it exists, so the duplicate resource won't be created.
- I believe multiple instances of Backstage in the same namespace, using the same SonataFlowPlatform, should be OK as long as the Backstage versions are the same. The reason is that SonataFlowPlatform is also versioned, and the orchestrator plugin needs compatibility between the SonataFlow version and the RHDH version. An RHDH deployment using a mismatched SFP or OSL version will definitely create issues. So different Backstage values are OK, but a different version is not, I think.

Not sure if we should prohibit this behavior or just not support it.
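A sketch of what that existence check could look like in the chart template, using Helm's `lookup` function. Note that `lookup` returns an empty result under `helm template` and `--dry-run`, so the guard only takes effect against a live cluster; the empty `spec` is a placeholder for the chart's actual platform spec:

```yaml
{{- /* Skip creating the CR if a SonataFlowPlatform already exists in this namespace */}}
{{- if not (lookup "sonataflow.org/v1alpha08" "SonataFlowPlatform" .Release.Namespace "sonataflow-platform") }}
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec: {}  # actual platform spec goes here
{{- end }}
```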
After playing with it some more, removing the first helm release will destroy the sonataflow resources for the other orchestrator instances that exist, and that is a problem...
I'm not sure how we can actually support multiple orchestrator installations in the same namespace
Okay, I think it is fine to call this out in the docs as a known limitation and leave it as it is for now. But instead of the current error from Helm which is not that clear at first sight, I would suggest, if not too much additional work, returning a better error message letting the user know about this unsupported scenario.
Returning an error upon a second SFP apply sounds good, though users can always hack around it and simply create more RHDH + Orchestrator instances, then reconfigure the SFP to point to whatever secret they want. Anyway, I will document the recommended way.
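One possible shape for that clearer error, again via `lookup` plus `fail` (a sketch only, subject to the same `--dry-run` caveat as any `lookup`-based check):

```yaml
{{- $existing := lookup "sonataflow.org/v1alpha08" "SonataFlowPlatform" .Release.Namespace "sonataflow-platform" }}
{{- if $existing }}
{{- $owner := get (default (dict) $existing.metadata.annotations) "meta.helm.sh/release-name" }}
{{- if ne $owner .Release.Name }}
{{- fail (printf "A SonataFlowPlatform already exists in namespace %s (owned by release %q); installing a second RHDH Orchestrator instance in the same namespace is not supported." .Release.Namespace $owner) }}
{{- end }}
{{- end }}
```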
I'll take this with sonataflow team and see if we can file RFE to support more than a single platform per namespace.
@rm3l is the requirement for having a few RHDH instances in the same namespace a production use case, or for testing?
Not sure if this is something we should recommend in production or not. I've seen this so far in a discussion with one user, where they were deploying two versions of the Helm Chart in their namespace.
And this is a scenario I usually tend to also check on Dev Sandbox for example where I'm limited to a single namespace.
So it most probably makes sense for testing.
I do not think we should even consider it for production :) , and even for testing it is not really clear — for testing what would a user need several instances of RHDH in one namespace?
I do not think we offer any multi-RHDH-instance scenario.
@rm3l, Moti and I had another issue I wanted to raise. Creating a new one will require some extra steps. The fedora/postgres image does not allow init scripts or additional databases at startup, so a job is needed to exec into the psql instance and create the new db. This will include helm templates for the job, role binding etc. Using the default db is simple, but might cause issues down the road as it really should be separated. What do you think?
Indeed, I'd find it safer to create a new one. But why would you need additional role bindings? A Job that has a psql client and runs …
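For reference, a minimal sketch of such a Job, connecting with a psql client from its own pod rather than exec'ing into the Postgres pod (the exec approach is what would require the extra RBAC). All names here — the `backstage-psql` Service, the `backstage-psql-secret` Secret, and the `sonataflow` database — are placeholders, not the chart's actual resources:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: create-orchestrator-db
  annotations:
    # Run after the chart's resources are installed/upgraded
    "helm.sh/hook": post-install,post-upgrade
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: create-db
          image: postgres:15  # any image that ships a psql client
          env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: backstage-psql-secret  # placeholder secret name
                  key: POSTGRES_PASSWORD
          command:
            - sh
            - -c
            # Create the database only if it does not exist yet
            - |
              psql -h backstage-psql -U postgres -tAc \
                "SELECT 1 FROM pg_database WHERE datname='sonataflow'" | grep -q 1 || \
              psql -h backstage-psql -U postgres -c "CREATE DATABASE sonataflow"
```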
Hey @rm3l
d322093 to 8053637
Hey @rm3l, thanks for the review and the testing. I've added your suggestions and the CI is green now.
Merged ea0243d into redhat-developer:main
(WIP)
This PR introduces changes to the RHDH Helm chart that enable the installation of Orchestrator as a flavor of RHDH. It changes the chart values and adds several templates. Upon enabling orchestrator, new dynamic plugins, network policies, and other CRs will be created.
Tests and documentation are TBA.
Relates to this epic: https://issues.redhat.com/browse/RHIDP-6159
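To illustrate the intended usage, enabling the flavor could look roughly like this in the chart's values (the `orchestrator.enabled` key is an assumption about the final interface, not a confirmed value name):

```yaml
# values.yaml (hypothetical keys)
orchestrator:
  enabled: true  # renders the orchestrator templates: dynamic plugins,
                 # network policies, SonataFlowPlatform and related CRs
```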
Checklist
- … `Chart.yaml` according to semver.
- … `values.yaml` and added to the README.md. The pre-commit utility can be used to generate the necessary content. Use `pre-commit run -a` to apply changes.
- … `pre-commit` hook.
- … `ct lint` command.